Alejandro Piad Morffis explains why reliable AI requires a paradigm shift

Hallucinations are not a bug. They’re an unavoidable consequence, or, to put it more bluntly, a feature of the statistical underpinnings that make these models efficient.

Sometimes what looks like a hallucination is caused by bad training data. Technically, this isn’t a hallucination at all: the model is faithfully reproducing an error it learned from its corpus.

In more technical terms, the statistical model assumes a smooth distribution, which is necessary because the size of the data the model needs to encode is orders of magnitude larger than its memory (i.e., the number of parameters). Thus, the model must compress the training corpus, and lossy compression means discarding some information.
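To make that size mismatch concrete, here is a rough back-of-envelope sketch; every number in it (corpus size, bytes per token, parameter count, weight precision) is an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope comparison of training-data size vs. model memory.
# All quantities below are hypothetical round numbers for illustration.

corpus_tokens = 15e12        # assume a ~15-trillion-token training corpus
bytes_per_token = 2          # rough storage cost of one token id
corpus_bytes = corpus_tokens * bytes_per_token

params = 70e9                # assume a 70-billion-parameter model
bytes_per_param = 2          # 16-bit weights
model_bytes = params * bytes_per_param

print(f"corpus: {corpus_bytes / 1e12:.0f} TB")        # ~30 TB of token ids
print(f"model:  {model_bytes / 1e9:.0f} GB")          # ~140 GB of weights
print(f"ratio:  {corpus_bytes / model_bytes:.0f}x")   # corpus ~200x larger
```

Under these assumed numbers the corpus is a couple of orders of magnitude larger than the model’s weights, so the model simply cannot memorize the data verbatim; it has to compress, and compress lossily.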

Recent research suggests that if a sentence can be generated at all, no matter how low its base probability, then there is some prompt that will elicit it with almost 100% certainty.
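One way to state that claim formally (my paraphrase, not the exact theorem from the cited research): for any sequence the model assigns nonzero probability, and any tolerance you pick, some prompt drives that sequence arbitrarily close to being the certain output.

```latex
% A paraphrase of the claim, not the paper's exact statement:
% any sequence y with nonzero probability under the model p_theta
% can be driven arbitrarily close to certainty by some prompt x.
\forall y \;\text{with}\; p_\theta(y) > 0,\;
\forall \epsilon > 0,\;
\exists x \;\text{such that}\;
p_\theta(y \mid x) \ge 1 - \epsilon
```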